Near-Optimal Distributed Maximum Flow

Authors
Abstract


Similar resources

Near-Optimal Distributed Tree Embedding

Tree embeddings are a powerful tool in the area of graph approximation algorithms. Roughly speaking, they transform problems on general graphs into much easier ones on trees. Fakcharoenphol, Rao, and Talwar (FRT) [STOC’04] present a probabilistic tree embedding that transforms n-node metrics into (probability distributions over) trees, while stretching each pairwise distance by at most an O(log ...
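For context, the bound referred to in the truncated sentence above is the standard FRT guarantee from the literature (the notation d for the input metric, d_T for the sampled tree metric, and D for the distribution over trees is ours, not the snippet's): the tree metric never shrinks a distance, and each pairwise distance is stretched by at most a logarithmic factor in expectation,

\[
  d(u,v) \;\le\; d_T(u,v) \quad \text{for every tree } T \text{ in the support of } \mathcal{D},
\]
\[
  \mathbb{E}_{T \sim \mathcal{D}}\!\left[ d_T(u,v) \right] \;\le\; O(\log n) \cdot d(u,v) \quad \text{for all pairs } u, v.
\]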

Near-Optimal Distributed Failure Circumscription

Small failures should only disrupt a small part of a network. One way to achieve this is to mark the area surrounding a failure as untrustworthy, circumscribing the failure. This can be done with a distributed algorithm using hierarchical clustering and neighbor relations, and the resulting circumscription is near-optimal for convex failures.

Near Optimal Solutions for Maximum Quasi-bicliques

The maximum quasi-biclique problem has been proposed for finding interacting protein group pairs from large protein-protein interaction (PPI) networks. The problem is defined as follows: THE MAXIMUM QUASI-BICLIQUE PROBLEM: Given a bipartite graph G = (X ∪ Y, E) and a number 0 < δ ≤ 0.5, find a subset X_opt of X and a subset Y_opt of Y such that any vertex x ∈ X_opt is incident to at least (1 − δ)|Y_opt|...
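As a minimal illustration of the condition in that (truncated) definition, the sketch below checks whether a given pair of vertex subsets forms a δ-quasi-biclique. The edge representation, all names, and the symmetric requirement on the Y side are illustrative assumptions, not taken from the paper.

# Hypothetical checker for the quasi-biclique condition sketched above.
# The edge set, names, and the symmetric condition on Y are illustrative
# assumptions, not taken from the paper.
from typing import Set, Tuple

def is_quasi_biclique(edges: Set[Tuple[str, str]],
                      x_sub: Set[str],
                      y_sub: Set[str],
                      delta: float) -> bool:
    """Every x in x_sub must touch at least (1 - delta)*|y_sub| vertices of
    y_sub, and (assumed symmetrically) every y in y_sub must touch at least
    (1 - delta)*|x_sub| vertices of x_sub."""
    assert 0 < delta <= 0.5
    for x in x_sub:
        if sum((x, y) in edges for y in y_sub) < (1 - delta) * len(y_sub):
            return False
    for y in y_sub:
        if sum((x, y) in edges for x in x_sub) < (1 - delta) * len(x_sub):
            return False
    return True

# Tiny example: a 2x2 biclique missing one edge still satisfies delta = 0.5.
E = {("x1", "y1"), ("x1", "y2"), ("x2", "y1")}
print(is_quasi_biclique(E, {"x1", "x2"}, {"y1", "y2"}, delta=0.5))  # True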

Distributed Optimal Power Flow using ALADIN

The present paper applies the recently proposed Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) method to solve non-convex AC Optimal Power Flow Problems (OPF) in a distributed fashion. In contrast to the often-used Alternating Direction Method of Multipliers (ADMM), ALADIN guarantees locally quadratic convergence for AC-OPF. Numerical results for IEEE 5–300 bus test cases in...

Near Optimal Coded Data Shuffling for Distributed Learning

Data shuffling between a distributed cluster of nodes is one of the critical steps in implementing large-scale learning algorithms. Randomly shuffling the data-set among a cluster of workers allows different nodes to obtain fresh data assignments at each learning epoch. This process has been shown to provide improvements in the learning process. However, the statistical benefits o...
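As a very rough sketch of the plain (uncoded) shuffling step described above, the snippet below re-partitions sample indices across workers at every epoch. It ignores the coding and communication aspects that are the paper's actual subject, and all names and parameters are illustrative.

# Minimal, illustrative sketch of per-epoch random data shuffling across
# workers; the coded-shuffling scheme from the paper is not modeled here.
import random

def shuffle_assignments(num_samples: int, num_workers: int, epoch: int,
                        seed: int = 0) -> list:
    """Return a fresh random partition of sample indices, one shard per worker,
    so that every worker receives a different data assignment each epoch."""
    rng = random.Random(1_000_003 * seed + epoch)  # deterministic per-epoch RNG
    indices = list(range(num_samples))
    rng.shuffle(indices)
    # Round-robin split of the shuffled indices into one shard per worker.
    return [indices[w::num_workers] for w in range(num_workers)]

# Usage: 10 samples over 3 workers; the assignment changes between epochs.
print(shuffle_assignments(10, 3, epoch=0))
print(shuffle_assignments(10, 3, epoch=1))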

Journal

Journal title: SIAM Journal on Computing

Year: 2018

ISSN: 0097-5397, 1095-7111

DOI: 10.1137/17m113277x